Search for: All records

Creators/Authors contains: "Xu, Heng"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. A key challenge facing the use of machine learning (ML) in organizational selection settings (e.g., the processing of loan or job applications) is the potential bias against (racial and gender) minorities. To address this challenge, a rich literature of Fairness-Aware ML (FAML) algorithms has emerged, attempting to ameliorate biases while maintaining the predictive accuracy of ML algorithms. Almost all existing FAML algorithms define their optimization goals according to a selection task, meaning that ML outputs are assumed to be the final selection outcome. In practice, though, ML outputs are rarely used as-is. In personnel selection, for example, ML often serves a support role to human resource managers, allowing them to more easily exclude unqualified applicants. This effectively assigns to ML a screening rather than a selection task. It might be tempting to treat selection and screening as two variations of the same task that differ only quantitatively in the admission rate. This paper, however, reveals a qualitative difference between the two in terms of fairness. Specifically, we demonstrate through conceptual development and mathematical analysis that miscategorizing a screening task as a selection one could not only degrade final selection quality but also result in fairness problems such as selection biases within the minority group. After validating our findings with experimental studies on simulated and real-world data, we discuss several business and policy implications, highlighting the need for firms and policymakers to properly categorize the task assigned to ML in assessing and correcting algorithmic biases.
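The screening-versus-selection distinction in the abstract above can be made concrete with a toy simulation. The sketch below is illustrative only: the noisy-score model, the 20% minority share, and both cutoffs are invented assumptions, not the paper's analysis. It simply shows that the same score distribution can produce a very different group-rate gap at a top-10% selection cutoff than at a 50% screening cutoff followed by a downstream stage.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy applicant pool.  All parameters here are invented for
# illustration; they are not the paper's model.
n = 10_000
minority = rng.random(n) < 0.2              # 20% minority applicants
quality = rng.normal(0, 1, n)               # latent qualification, same for both groups
noise_sd = np.where(minority, 1.5, 1.0)     # assumption: noisier scores for the minority
score = quality + rng.normal(0, noise_sd)   # what the ML model sees

def rate_gap(selected):
    """Majority-minus-minority gap in admission rates."""
    return selected[~minority].mean() - selected[minority].mean()

# Selection task: the ML output *is* the final decision (top 10%).
selection = score >= np.quantile(score, 0.90)

# Screening task: ML only removes the bottom half; a downstream
# (here quality-blind, random) human stage admits 10% overall.
shortlist = score >= np.quantile(score, 0.50)
screening = shortlist & (rng.random(n) < 0.10 / shortlist.mean())

print(f"rate gap when ML selects (top 10%): {rate_gap(selection):+.3f}")
print(f"rate gap when ML screens (top 50%): {rate_gap(screening):+.3f}")
```

Under these assumptions the two gaps differ in size (and can differ in sign), which is the qualitative point: a fairness property tuned at one admission rate need not transfer to another.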
  2. Over the past two decades, behavioral research in privacy has made considerable progress transitioning from acontextual studies to using contextualization as a powerful sensitizing device for illuminating the boundary conditions of privacy theories. Significant challenges and opportunities await, however, in elevating and converging individually contextualized studies into a context-contingent theory that explicates the mechanisms through which contexts influence consumers' privacy concerns and their behavioral reactions. This paper identifies the important barriers that this lack of context theorizing poses to the generalizability of privacy research findings and argues for accelerating the transition from the contextualization of individual research studies to an integrative understanding of context effects on privacy concerns. It also takes a first step toward this goal by providing a conceptual framework and the associated methodological instantiation for assessing how context-oriented nuances influence privacy concerns. Empirical evidence demonstrates the value of the framework as a diagnostic device guiding the selection of contextual contingencies in future research, so as to advance the pace of convergence toward context-contingent theories in information privacy. This paper was accepted by Anindya Ghose, information systems.
  3. Research and practical development of data-anonymization techniques have proliferated in recent years. Yet, limited attention has been paid to examining the potentially disparate impact of privacy protection on underprivileged subpopulations. This study is one of the first attempts to examine the extent to which data anonymization could mask the gross statistical disparities between subpopulations in the data. We first describe two common mechanisms of data anonymization and two prevalent types of statistical evidence for disparity. Then, we develop a conceptual foundation and mathematical formalism demonstrating that the two data-anonymization mechanisms have distinctive impacts on the identifiability of disparity, which also vary based on its statistical operationalization. After validating our findings with empirical evidence, we discuss the business and policy implications, highlighting the need for firms and policymakers to balance the protection of privacy against the recognition/rectification of disparate impact. This paper was accepted by Chris Forman, information systems.
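The masking effect described in the abstract above can be sketched in a few lines. The example below rests entirely on invented assumptions: hypothetical approval counts for two groups, a two-proportion z-test as the statistical evidence of disparity, and Laplace noise addition (the mechanism used in differential privacy) as the anonymization step. The counts and the privacy budget epsilon are illustrative, not values from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical approval counts for a majority group A and a minority
# group B (invented numbers, not data from the paper).
n_a, x_a = 200, 120   # 60% approval rate
n_b, x_b = 100, 45    # 45% approval rate: a genuine disparity

def two_prop_pvalue(x1, n1, x2, n2):
    """Two-proportion z-test, one common form of statistical
    evidence for disparity between subpopulations."""
    p = (x1 + x2) / (n1 + n2)
    se = np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return 2 * stats.norm.sf(abs(x1 / n1 - x2 / n2) / se)

print("p-value, raw counts:       ", two_prop_pvalue(x_a, n_a, x_b, n_b))

# Noise addition, one common anonymization mechanism: Laplace noise
# on every released count.  epsilon is an assumed privacy budget;
# counts are clipped to stay positive.
epsilon = 0.1
noisy = [max(c + rng.laplace(0, 1 / epsilon), 1)
         for c in (x_a, n_a, x_b, n_b)]
print("p-value, anonymized counts:", two_prop_pvalue(*noisy))
```

Depending on the noise draw, a disparity that is clearly significant in the raw counts may no longer be detectable in the anonymized release, which is the identifiability concern the paper formalizes.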
  4. Too much of a good thing can be harmful. Choice overload, a compelling paradox in consumer psychology, exemplifies this notion with the idea that offering more product options could impede rather than improve consumer satisfaction, even when consumers are free to ignore any available option. After attracting intense interest from multiple disciplines over the past decades, research on choice overload has produced voluminous yet paradoxical findings that are widely perceived as inconsistent even at the meta-analytic level. This paper launches an interdisciplinary inquiry to resolve the inconsistencies on both the conceptual and empirical fronts. Specifically, we identified a surprising but robust pattern among the existing empirical evidence for the choice-overload effect and demonstrated through mathematical analysis and extensive simulation studies that the pattern would likely emerge from only one specific type of latent mechanism underlying the moderated choice-overload effect. The paper discusses the research and practical implications of our findings, namely, the broad promise of analytical meta-analysis (an emerging area for the use of data analytics) and machine learning to address the widely recognized inconsistencies in social and behavioral sciences, and the unique and salient role of the information systems community in developing this new era of meta-analysis.
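One family of latent mechanisms behind choice overload, a best-match benefit that grows sublinearly in assortment size set against a linear evaluation cost, can be simulated in a few lines. The sketch below is not the paper's model: the functional forms, sample size, and cost parameter are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_satisfaction(n_options, n_consumers=20_000, eval_cost=0.08):
    """Toy latent mechanism: the best match improves (sublinearly)
    with assortment size, while evaluation effort grows linearly.
    Functional forms and parameters are illustrative assumptions."""
    utilities = rng.normal(0, 1, size=(n_consumers, n_options))
    best_match = utilities.max(axis=1)          # value of the chosen option
    return (best_match - eval_cost * n_options).mean()

for k in (2, 6, 12, 24, 48):
    print(f"{k:3d} options -> mean satisfaction {mean_satisfaction(k):+.3f}")
```

With these assumptions, mean satisfaction first rises and then falls as the assortment grows, the inverted-U pattern that choice-overload studies test for, which is the kind of latent mechanism the paper's simulations probe.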